Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

Authors

  • Ernesto G. Birgin
  • J. L. Gardenghi
  • José Mario Martínez
  • Sandra Augusta Santos
  • Philippe L. Toint
Abstract

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order p (for p ≥ 1) and to assume Lipschitz continuity of the p-th derivative, then an ε-approximate first-order critical point can be computed in at most O(ε^{−(p+1)/p}) evaluations of the problem’s objective function and its derivatives. This generalizes and subsumes results known for p = 1 and p = 2.
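The bound above is attained by an adaptive regularization method that, at each iteration, approximately minimizes the p-th order Taylor model of the objective augmented by a (p+1)-st power regularization term. Below is a minimal, illustrative Python sketch of the p = 2 instance (adaptive cubic regularization) on a toy problem. The acceptance threshold 0.1, the halving/doubling updates of σ, the generic BFGS subsolver for the model, and the Rosenbrock test function are all simplifying assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

def ar2(f, grad, hess, x, eps=1e-5, sigma=1.0, max_iter=500):
    """Adaptive cubic regularization: the p = 2 case of the AR_p framework.

    Each iteration (approximately) minimizes the regularized Taylor model
        m(s) = f(x) + gᵀs + (1/2) sᵀH s + (σ/3)‖s‖³
    and adapts σ according to how well the model predicted the decrease.
    """
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # eps-approximate first-order point found
            return x
        H, fx = hess(x), f(x)

        def model(s):
            return fx + g @ s + 0.5 * s @ H @ s + sigma / 3.0 * np.linalg.norm(s) ** 3

        def model_grad(s):
            return g + H @ s + sigma * np.linalg.norm(s) * s

        # A generic BFGS subsolver stands in for the approximate model minimization.
        s = minimize(model, np.zeros_like(x), jac=model_grad, method="BFGS").x
        predicted = fx - model(s)
        actual = fx - f(x + s)
        if predicted > 0 and actual >= 0.1 * predicted:   # successful step
            x, sigma = x + s, max(1e-8, 0.5 * sigma)
        else:                                             # reject; regularize harder
            sigma *= 2.0
    return x

print(ar2(rosen, rosen_der, rosen_hess, np.array([-1.2, 1.0])))  # close to (1, 1)
```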


Similar Articles

Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The unconstrained minimization of a sufficiently smooth objective function f(x) is considered, for which derivatives up to order p, p ≥ 2, are assumed to be available. An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order p and that is guaranteed to find a first- and second-order critical point in at most O(…


On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming

We estimate the worst-case complexity of minimizing an unconstrained, nonconvex composite objective with a structured nonsmooth term by means of some first-order methods. We find that it is unaffected by the nonsmoothness of the objective, in that a first-order trust-region or quadratic regularization method applied to it takes at most O(ε^{−2}) function evaluations to reduce the size of a first-order…


On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems

It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε^{−2}) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε^{−2}) evaluations known for the steepest descent is tight, and that Newton's method may be as slow as…


Partially separable convexly-constrained optimization with non-Lipschitzian singularities and its complexity

An adaptive regularization algorithm using high-order models is proposed for partially separable convexly constrained nonlinear optimization problems whose objective function contains non-Lipschitzian ℓq-norm regularization terms for q ∈ (0, 1). It is shown that the algorithm using a p-th order Taylor model for p odd needs in general at most O(ε^{−(p+1)/p}) evaluations of the objective function and…


Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients

The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers…

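Several of the entries above refer to the classical O(ε^{−2}) worst-case bound for first-order methods, which this paper recovers as the p = 1 case of O(ε^{−(p+1)/p}). As a concrete illustration of where that bound comes from, here is a minimal sketch, assuming a gradient that is Lipschitz continuous with a known constant L; the function name, the fixed step 1/L, and the quadratic toy problem are illustrative assumptions:

```python
import numpy as np

def steepest_descent(grad, x, L, eps=1e-4, max_iter=10**6):
    """Fixed-step gradient descent with step 1/L for an L-Lipschitz gradient.

    Classical argument: each step decreases f by at least ||g||^2 / (2L),
    so no more than 2L (f(x0) - f_low) / eps^2 iterations, i.e. O(eps^-2),
    can occur before ||grad f(x)|| <= eps.
    """
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k          # k gradient evaluations sufficed
        x = x - g / L
    return x, max_iter

# Toy problem: f(x) = 0.5 x'Ax with gradient Ax; L = largest eigenvalue of A.
A = np.diag([1.0, 10.0])
x_final, n_iters = steepest_descent(lambda x: A @ x, np.array([1.0, 1.0]), L=10.0)
print(n_iters, x_final)
```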


Journal:
  • Math. Program.

Volume 163, Issue: -

Pages: -

Publication date: 2017